
DISC Inc / Group items tagged wordpress SEO


jack_fox

The Ultimate Web Server Security Guide @ MyThemeShop - 0 views

  • They could insert links into the site to boost their SEO rankings. Hackers can make a killing selling links from exploited sites. Alternatively, a hacker could deface the site and demand money to restore it (ransom). They could even place ads on the site and use the traffic to make money. In most cases, an attacker will also install backdoors into the server. These are deliberate security holes that allow them to come back and exploit the site in the future – even if the insecure plugin has been replaced.
  • Unfortunately, under WordPress, every plugin and theme has the ability to alter anything on the site. They can even be exploited to infect other apps and sites hosted on the same machine.
  • Theme developers are often relatively inexperienced coders. Usually, they’re professional graphic artists who have taught themselves a little PHP on the side. Plugins are another popular line of attack – they account for 22% of successful hacks. Put together, themes and plugins are a major source of security trouble.
  • Each person who uses your system should only have the privileges they need to perform their tasks.
  • Don’t depend on a single security measure to keep your server safe. You need multiple rings of defense.
  • Security exploits exist at all levels of the technology stack, from the hardware up. WP White Security revealed that 41% of WordPress sites are hacked through a weakness in the web host.
  • While it’s important to use a strong password, password cracking is not a primary focus for hackers.
  • the more software you have installed on your machine, the easier it is to hack – even if you aren’t using the programs! Clearly, programs that are designed to destroy your system are dangerous. But even innocent software can be used in an attack.
  • There are 3 ways to reduce the attack surface: 1. Run fewer processes 2. Uninstall programs you don’t need 3. Build a system from scratch that only has the processes you need
  • A really good authentication system uses multiple tests. Someone could steal or guess your password. They could grab your laptop with its cryptographic keys.
  • If you want to run multiple processes at the same time, you need some way of managing them. This is basically what a kernel is. It does more than that – it handles all of the complex details of the computer hardware, too. And it runs the computer’s networking capabilities
  • programs exist as files when they are not running in memory
  • SELinux’s default response is to deny any request.
  • SELinux is extremely comprehensive, but this power comes at a price. It’s difficult to learn, complex to set up, and time-consuming to maintain.
  • AppArmor is an example of a MAC tool, although it’s nowhere near as comprehensive as SELinux. It applies rules to programs to limit what they can do.
  • AppArmor is relatively easy to set up, but it does require you to configure each application and program one by one. This puts the onus for security in the hands of the user or sysadmin. Often, when new apps are added, users forget to configure AppArmor. Or they do a horrible job and lock themselves out, so their only option is to disable the profile. That said, several distributions have adopted AppArmor.
  • Generic profiles shipped by repo teams are designed to cover a wide range of different use cases, so they tend to be fairly loose. Your specific use cases are usually more specific. In this case, it pays to fine-tune the settings, making them more restrictive.
  • GRSecurity is a suite of security enhancements
  • In the future, this could become a viable option. For now, we’ll use Ubuntu and AppArmor.
  • Apache is a user-facing service – it’s how your users interact with your website. It’s important to control this interaction too.
  • If your Apache configuration is bad, these files can be viewed as plain text. All of your code will be visible for anyone to see – this potentially includes your database credentials, cryptographic keys, and salts.
  • You can configure Apache to refuse any requests for these essential directories using .htaccess files. These are folder-level configuration files that Apache reads before it replies to a request.
  • The primary use for .htaccess files is to control access
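    As a rough illustration (standard WordPress file names, Apache 2.4 syntax; adjust to your own setup), an .htaccess file in the site root can block direct requests for sensitive files and switch off directory listings:

      # Deny direct access to wp-config.php
      <Files wp-config.php>
          Require all denied
      </Files>

      # Disable directory listings
      Options -Indexes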
  • If an attacker knows your WordPress cryptographic salts, they can use fake cookies to trick WordPress into thinking they have logged on already.
  • If the hacker has physical access to the computer, they have many options at their disposal. They can type commands through the keyboard, or insert a disk or USB stick into the machine and launch an attack that way.
  • When it comes to network-based attacks, attackers have to reach through one of the machine’s network ports.
  • For an attacker to exploit a system, they have to communicate to a process that’s listening on a port. Otherwise, they’d simply be sending messages that are ignored. This is why you should only run processes that you need for your site to run. Anything else is a security risk.
  • Often, ports are occupied by processes that provide no real valuable service to the machine’s legitimate users. This tends to happen when you install a large distribution designed for multiple uses. Large distros include software that is useless to you in terms of running a website. So the best strategy is to start with a very lightweight distro and add the components you need.
  • If you see any unnecessary processes, you can shut them down manually. Better yet, if the process is completely unnecessary, you can remove it from your system.
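    A quick way to audit this on an Ubuntu or Debian host (a sketch; avahi-daemon is just an example of a package you might not need):

      # List processes listening on TCP/UDP ports
      sudo ss -tulnp

      # List running services
      systemctl list-units --type=service --state=running

      # Stop and remove a service you don't need (example package)
      sudo systemctl disable --now avahi-daemon
      sudo apt purge avahi-daemon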
  • Firewalls are quite similar to access control within the computer. They operate on a network level, and you can use them to enforce security policies. A firewall can prevent processes from broadcasting information from a port. It can stop outside users from sending data to a port. And it can enforce more complex rules.
  • Simply installing and running a firewall does not make your host machine secure – it’s just one layer in the security cake. But it’s a vital and a powerful one.
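    For example, with ufw on Ubuntu (a minimal sketch; open only the ports your services actually use), a default-deny policy for a web server might look like:

      sudo ufw default deny incoming
      sudo ufw default allow outgoing
      sudo ufw allow OpenSSH
      sudo ufw allow 80/tcp
      sudo ufw allow 443/tcp
      sudo ufw enable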
  • First of all, we need to configure our software to resist common attacks. But that can only protect us from attacks we know about. Access control software, such as AppArmor, can drastically limit the damage caused by unauthorized access. But you still need to know an attack is in progress.
  • This is where Network Intrusion Detection Software (NIDS) is essential. It scans the incoming network traffic, looking for unusual patterns or signs of a known attack. If it sees anything suspicious, it logs an alert.
  • It’s up to you to review these logs and act on them.
  • If it’s a false alarm, you should tune your NIDS software to ignore it. If it’s an ineffective attack, you should review your security and block the attacker through the firewall.
  • That’s why it’s essential to have an automated backup system. Finally, you need to understand how the attack succeeded, so you can prevent it from recurring. You may have to change some settings on your Firewall, tighten your access rules, adjust your Apache configuration, and change settings in your wp-config file. None of this would be possible without detailed logs describing the attack.
  • Every web server has a breaking point and dedicated DOS attackers are willing to increase the load until your server buckles. Good firewalls offer some level of protection against naive DOS attacks
  • a tiny number of sites (less than 1%) are hacked through the WordPress core files
  • Major DNS attacks have taken down some of the biggest sites in the world – including eBay and PayPal. Large hosting companies like HostGator and Bluehost have been attacked. It’s a serious risk!
  • Right now, due to the way the web currently works, it’s impossible to download a web page without the IP address of a server. In the future, technologies like IPFS and MaidSafe could change that.
  • So there are 2 benefits to using a CDN. The first is that your content gets to your readers fast. The second benefit is server anonymity – nobody knows your real IP address – including the psychos. This makes it pretty impossible to attack your server – nobody can attack a server without an IP address.
  • When CDNs discover a DDOS attack, they have their own ways to deal with it. They often display a very lightweight “are you human?” message with a captcha. This tactic reduces the bandwidth costs and screens out the automated attacks.
  • If any of your DNS records point to your actual server, then it’s easy to find it and attack it. This includes A records (address records) and MX records (mail exchange). You should also use a separate mail server machine to send your emails. Otherwise, your email headers will expose your server’s real IP address.
  • If your hosting company refuses to give you a new IP address, it may be time to find a new service provider.
  • WordPress uses hashing to store passwords in the database. It doesn’t store the actual password – instead, it stores a hashed version. If someone steals your database tables, they won’t have the actual passwords.
  • If you used a simple hash function, a hacker could gain privileged access to your app in a short period of time.
  • The salt strings are stored in your site’s wp-config.php file.
  • Salts dramatically increase the time it would take to get a password out of a hash code – instead of taking a few weeks, it would take millions of years
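    In wp-config.php the salts are a block of define() calls like the sketch below – the values shown are placeholders and should be replaced with fresh strings from the generator at https://api.wordpress.org/secret-key/1.1/salt/:

      define( 'AUTH_KEY',         'put a unique random phrase here' );
      define( 'SECURE_AUTH_KEY',  'put a unique random phrase here' );
      define( 'LOGGED_IN_KEY',    'put a unique random phrase here' );
      define( 'NONCE_KEY',        'put a unique random phrase here' );
      define( 'AUTH_SALT',        'put a unique random phrase here' );
      define( 'SECURE_AUTH_SALT', 'put a unique random phrase here' );
      define( 'LOGGED_IN_SALT',   'put a unique random phrase here' );
      define( 'NONCE_SALT',       'put a unique random phrase here' );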
  • You keep the other key (the decryption key) to yourself. If anyone stole it, they could decode your private messages! These 2-key cryptographic functions do exist. They are the basis of TLS (https) and SSH.
  • the most secure systems tend to be the simplest. The absolute secure machine would be one that was switched off.
  • For WordPress sites, you also need PHP and a database.
  • A VM is an emulated computer system running inside a real computer (the host). It contains its own operating system and resources, such as storage, and memory. The VM could run a completely different operating system from the host system – you could run OSX in a VM hosted on your Windows machine
  • This isolation offers a degree of protection. Let’s imagine your VM gets infected with a particularly nasty virus – the VM’s file system could be completely destroyed, or the data could be hopelessly corrupted. But the damage is limited to the VM itself. The host environment would remain safe.
  • This is how shared hosting and virtual private servers (VPSes) work today. Each customer has access to their own self-contained environment, within a virtual machine.
  • VMs are not just for hosting companies. If you’re hosting multiple sites on a dedicated server or a VPS, VMs can help to make your server more secure. Each site can live inside its own VM. That way, if one server is hacked, the rest of your sites are safe.
  • Even with all these considerations, the benefits of VMs outweigh their drawbacks. But performance is vital on the web.
  • Containers (like Docker) are very similar to VMs.
  • Because we’ve cut the hypervisor out of the loop, applications run much faster – almost as fast as processes in the host environment. Keeping each container separate does involve some computation by the container software. But it’s much lighter than the work required by a hypervisor!
  • Docker Cloud is a web-based service that automates the task for you. It integrates smoothly with the most popular cloud hosting platforms (such as Amazon Web Services, or Digital Ocean).
  • With containers, you can guarantee that the developer’s environment is exactly the same as the live server. Before the developer writes a single line of code, they can download the container to their computer. If the code works on their PC, it will work on the live server. This is a huge benefit of using containers, and it’s a major reason for their popularity.
  • A complete stack of these layers is called an “image”
  • The core of Docker is the Docker Engine, which runs as a daemon – a long-running process.
  • another great resource – the Docker Hub. The hub is an online directory of community-made images you can download and use in your own projects. These include Linux distributions, utilities, and complete applications.
  • Docker has established a relationship with the teams behind popular open source projects (including WordPress) – these partners have built official images that you can download and use as-is.
  • when you finish developing your code, you should wrap it up inside a complete container image. The goal is to put all the code that runs your site inside a container and store the volatile data in a volume.
  • Although Docker can help to make your site more secure, there are a few major issues you need to understand: the Docker daemon runs as a superuser; it’s possible to load the entire filesystem into a container; and it’s possible to pass a reference to the Docker daemon into a container.
  • The solution to this issue is to use a MAC solution like SELinux, GRSecurity or AppArmor.
  • Never let anyone trick you into running a strange docker command.
  • only download and use Docker images from a trustworthy source. Official images for popular projects are security audited by the Docker team. Community images are not
  • there are the core WordPress files. These interact with the web server through the PHP runtime. WordPress also relies on the file system and a database server.
  • A service is some software component that listens for requests (over a protocol) and does something when it receives those requests.
  • Using Docker, you could install WordPress, Apache, and PHP in one container, and run MySQL from another. These containers could run on the same physical machine, or on different ones
  • The database service container can be configured to only accept connections that originate from the web container. This immediately removes the threat of external attacks against your database server
  • This gives you the perfect opportunity to remove high-risk software from your host machine, including: language runtimes and interpreters (such as PHP, Ruby, Python, etc.), web servers, databases, and mail servers
  • If a new version of MySQL is released, you can update the database container without touching the web container. Likewise, if PHP or Apache are updated, you can update the web container and leave the database container alone.
  • Because Docker makes it easy to connect these containers together, there’s no reason to lump all your software inside a single container. In fact, it’s a bad practice – it increases the security risk for any single container, and it makes it harder to manage them.
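    A minimal docker-compose sketch of this two-container layout, using the official wordpress and mysql images (service names, credentials, and versions are placeholders):

      version: "3"
      services:
        web:
          image: wordpress:latest
          ports:
            - "80:80"
          environment:
            WORDPRESS_DB_HOST: db
            WORDPRESS_DB_NAME: wordpress
            WORDPRESS_DB_USER: wp_user
            WORDPRESS_DB_PASSWORD: change-me
          volumes:
            - wp_data:/var/www/html
        db:
          image: mysql:5.7
          environment:
            MYSQL_DATABASE: wordpress
            MYSQL_USER: wp_user
            MYSQL_PASSWORD: change-me
            MYSQL_ROOT_PASSWORD: change-me-too
          volumes:
            - db_data:/var/lib/mysql
      volumes:
        wp_data:
        db_data:

    Only the web service publishes a port; the database stays on the internal Docker network, which matches the earlier point about blocking external connections to the database.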
  • If your site is already live on an existing server, the best approach is to set up a new host machine and then migrate over to it. Here are the steps you need to take:
  • With a minimal Ubuntu installation, you have a fairly bare-bones server. You also have the benefit of a huge repository of software you can install if you want.
  • If access control is like a lock protecting a building, intrusion detection is the security alarm that rings after someone breaks in.
  • Logging on to your host with a superuser account is a bad practice. It’s easy to accidentally break something.
  • Fail2ban blocks SSH users who fail the login process multiple times. You can also set it up to detect and block hack attempts over HTTP – this will catch hackers who attempt to probe your site for weaknesses.
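    A minimal /etc/fail2ban/jail.local sketch that enables the SSH jail (the numbers are illustrative; catching HTTP probes needs an additional jail and filter, not shown here):

      [sshd]
      enabled  = true
      maxretry = 5
      findtime = 600
      bantime  = 3600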
  • With multiple WordPress sites on your machine, you have 2 choices. You could create a new database container for each, or you could reuse the same container between them. Sharing the DB container is a little riskier, as a hacker could, theoretically, ruin all your sites with one attack. You can minimize that risk by: using a custom root user and password for your database (don’t use the default username of ‘root’); ensuring the DB container is not accessible over the internet (hide it away inside a Docker network); and creating new databases and users for each WordPress site, ensuring each user only has permissions for their specific database.
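    If you do share one database container, the per-site separation comes down to standard MySQL grants – roughly like this sketch (database, user, and password names are placeholders):

      CREATE DATABASE site_one_wp;
      CREATE USER 'site_one_user'@'%' IDENTIFIED BY 'a-strong-unique-password';
      GRANT ALL PRIVILEGES ON site_one_wp.* TO 'site_one_user'@'%';
      FLUSH PRIVILEGES;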
  • What are the benefits of using a single database container? It’s easier to configure and scale. It’s easier to backup and recover your data. It’s a little lighter on resources.
  • you could also add a caching container, like Varnish. Varnish caches your content so it can serve pages quickly – much faster than WordPress can
  • Docker has the ability to limit how much processor time and memory each container gets. This protects you against resource-exhaustion DoS attacks
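    For example (the limits are arbitrary; tune them to your host):

      docker run -d --name wordpress-web --memory=512m --cpus=1.0 wordpress:latest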
  • A containerized process still has some of the abilities of root, making it more powerful than a regular user. But it’s not as bad as full-on root privileges. With AppArmor, you can tighten the security further, preventing the process from accessing any parts of the system that do not relate to serving your website.
  • Docker Hub works like GitHub – you can upload and download images for free. The downside is that there’s no security auditing. So it’s easy to download a trojan horse inside a container.
  • Official images (such as WordPress and Apache) are audited by the Docker team. These are safe. Community images (which have names like user/myapp) are not audited.
  • a kernel exploit executed inside a container will affect the entire system. The only way to protect against kernel exploits is to regularly update the host system
  • Containers run in isolation from the rest of the system. That does not mean you can neglect security – your website lives inside these containers! Even if a hacker cannot access the full system from a container, they can still damage the container’s contents.
  • Under Ubuntu, AppArmor already protects you – to a degree. The Docker daemon has an AppArmor profile, and each container runs under a default AppArmor profile. The default profile prevents an app from breaking out of the container, and restricts it from doing things that would harm the system as a whole. However, the default profile offers no specific protection against WordPress specific attacks. We can fix this by creating a custom profile for your WordPress container.
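    Loading a custom profile and attaching it to the container looks roughly like this (the profile name docker-wordpress and its path are hypothetical; the profile itself still has to be written):

      # Load (or reload) the custom profile into the kernel
      sudo apparmor_parser -r -W /etc/apparmor.d/containers/docker-wordpress

      # Run the WordPress container under that profile
      docker run -d --security-opt apparmor=docker-wordpress wordpress:latest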
  • The net effect is that it’s impossible to install malware, themes or plugins through the web interface. We’ve already covered this to some degree with the .htaccess rules and directory permissions. Now we’re enforcing it through the Linux kernel.
  • There are versions of Docker for Mac and PC, so you’ll be able to run your site from your home machine. If the code works on your PC, it will also work on the server.
  • Tripwire tends to complain about the entries in the /proc filespace, which are auto-generated by the Linux kernel. These files contain information about running processes, and they tend to change rapidly while Linux runs your system. We don’t want to ignore the directory entirely, as it provides useful signs that an attack is in progress. So we’re going to have to update the policy to focus on the files we are interested in.
  • Now we should install an e-mail notification utility – to warn us if anything changes on the system. This will enable us to respond quickly if our system is compromised (depending on how often you check your emails).
  • Rootkits are malicious code that hackers install onto your machine. When they manage to get one on your server, it gives them elevated access to your system
  • Tripwire is configured to search in key areas. It’s good at detecting newly installed software, malicious sockets, and other signs of a compromised system. RKHunter looks in less obvious places, and it checks the contents of files to see if they contain known malicious code. RKHunter is supported by a community of security experts who keep it updated with known malware signatures – just like antivirus software for PCs.
  • If your hosting company offers the option, this would be a good point to make an image of your server. Most cloud hosting companies offer tools to do this.
  • With an image, it’s easy to launch new servers or recover the old one if things go horribly wrong.
  • We’ve hidden our server from the world while making it easy to read our content. We’ve built a firewall to block malicious traffic. We’ve trapped our web server inside a container where it can’t do any harm. We’ve strengthened Linux’s access control model to prevent processes from going rogue. We’ve added an intrusion detection system to identify corrupted files and processes. We’ve added a rootkit scanner. We’ve strengthened our WordPress installation with 2-factor authentication. And we’ve disabled the ability for any malicious user to install poisoned themes or plugins.
  • Make a routine of checking the logs (or emails if you configured email reporting). It’s vital to act quickly if you see any warnings. If they’re false warnings, edit the configuration. Don’t get into a habit of ignoring the reports.
  • Virtually everything that happens on a Linux machine is logged.
  • You have to make a habit of checking for new exploits and learn how to protect yourself against them. Regularly check for security patches and issues in the core WordPress app: WordPress Security Notices Also, check regularly on the forums or mailing lists for the plugins and themes you use on your site.
  • If you don’t yet have a network-level intrusion detection service, you can fix that by installing Snort or PSAD.
  • The only way to guarantee your safety is to constantly update your security tactics and never get complacent.
Jennifer Williams

Nofollow Link Social Media | SEO Training - 0 views

  •  
    The Nofollow Link & Social Media. Published by Your SEO Mentor under SEO, Social Media, Aug 23 2008. There have been a lot of questions about how Social Media is affecting the SEO industry. The question I would like to ask is how it can help the SEO industry and how it will affect the SEO for my clients’ sites and my own. The major issue with Social Media sites and how they play a role in your SEO these days is that a majority of them (especially the big boys, e.g. Twitter) use the Nofollow link. "Well," you’re asking, "what does this mean and why do I need to worry about it?" First of all, don’t worry about it; this is not the end of the world. But what it means is that going to all these major social media and networking sites and linking back to your website will, for the most part, have no effect on your search engine results. The NoFollow link (e.g. rel="nofollow") was originally created to block search engines from following links in blog comments; this was due to the very high amount of blog comment spamming. The wonderful Wikipedia definition says, "nofollow is an HTML attribute value used to instruct some search engines that a hyperlink should not influence the link target's ranking in the search engine's index. It is intended to reduce the effectiveness of certain types of search engine spam, thereby improving the quality of search engine results and preventing spamdexing from occurring in the first place." With Social Media sites popping up daily, and with them being very easy places to put user-generated content and links, spammers began the same old routine, and therefore we suffer from their actions. The top social media and networking sites quickly found that they too needed to use the nofollow attribute to help reduce the amount of spam submitted. So for the most part, placing a link on Social Media sites will not directly help your search engine optimization efforts. That doesn't mean Social Media cannot help in gaining valuable links to
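    For reference, a nofollow link is simply a normal anchor tag with the rel attribute set (the URL is a placeholder):

      <a href="https://example.com/" rel="nofollow">Example site</a>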
  •  
    Current Top 20 Social Bookmarking sites that Dofollow
Rob Laporte

Why businesses should implement structured data - Search Engine Watch Search Engine Watch - 0 views

  • To enable your business Knowledge Graph card, you need to add the necessary Corporate Contact markup on the homepage of your website.
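    A JSON-LD sketch of that Corporate Contact markup for the homepage (the organization name, URL, logo, and phone number are placeholders):

      <script type="application/ld+json">
      {
        "@context": "https://schema.org",
        "@type": "Organization",
        "name": "Example Company",
        "url": "https://www.example.com",
        "logo": "https://www.example.com/logo.png",
        "contactPoint": [{
          "@type": "ContactPoint",
          "telephone": "+1-800-555-0100",
          "contactType": "customer service"
        }]
      }
      </script>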
  • The easiest way to add micro-markup to the site is to use the Schema plugin. It works with any available schema options and is embedded in the Yoast SEO plugin.
  • If the above-mentioned plugin doesn’t suit you, you can choose from a large number of WordPress plugins alternatives for schema markup. Here are some of them: All In One Schema Rich Snippets Schema JSON-LD Markup Rich Reviews WP SEO Structured Data Schema Markup (JSON-LD) structured in schema.org
Verilliance

Wordpress SEO plugins - All in One, Platinum, Yoast compared - 0 views

  •  
    Good comparison/review of available SEO plugin options for Wordpress.
Rob Laporte

Any Duplicate Content Filter For WordPress? - 0 views

  •  
    WordPress › WP-eDel post copies « WordPress Plugins
Verilliance

Using Yoast WordPress SEO - PageLines Support - 0 views

  •  
    Detailed set up for Yoast SEO plugin
Rob Laporte

How to Optimize Your LinkedIn Profile For High SEO Rankings - No B.S. Marketing Blog by... - 0 views

  •  
    Some of the featured applications you might choose to add are Amazon Reading List, WordPress (this allows you to connect your WordPress blog to your LinkedIn account), Company Buzz (an application that allows you to follow all relevant trends and comments about your company), Slide Share (which allows you to share your presentations with colleagues and LinkedIn connections), and Tweets, which gives you access to the basic elements of Twitter on your LinkedIn profile.
jack_fox

10 Facts You Think You Know About SEO That Are Actually Myths - 0 views

  • The duplicate content penalty doesn’t exist.
  • It isn’t helpful to focus on individual ranking signals, because search engine algorithms are too sophisticated for that to be a useful way of thinking about how they rank pages.
  • Google’s algorithms have gotten better at understanding these types of low-quality backlinks and knowing when they should be ignored. As a result, the need for SEO pros to maintain and update a disavow file has diminished significantly
  • Now it is only recommended to make use of the disavow file when a site has received a manual action, in order to remove the offending links.
  • while there are plans to introduce a speed update later in 2018, Google only uses speed to differentiate between slow pages and those in the normal range.
jack_fox

Sitemaps & SEO: Are Sitemaps Still Important for SEO in 2019? - 0 views

  • <loc> and <lastmod>. These two tags are very important, so make sure you add them to your sitemap!
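    A minimal sitemap entry using those two tags (the URL and date are placeholders):

      <?xml version="1.0" encoding="UTF-8"?>
      <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
        <url>
          <loc>https://example.com/sample-post/</loc>
          <lastmod>2019-05-01</lastmod>
        </url>
      </urlset>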
  • You can differentiate via the user agent and show an HTML sitemap instead if a real person visits the page.   Yoast SEO already does this. Visiting a /sitemap_index.xml file on a WordPress website will return an HTML sitemap, while hitting CTRL + U to view the source will return the actual XML sitemap.
  • Google limits sitemaps to 50,000 URLs and 10MB, so make sure your file doesn’t exceed either limit
  • if you have an entire section of your website full with videos, then you might consider splitting them into a separate sitemap
jack_fox

Advanced Technical SEO: How social image sharing works and how to optimize your og:imag... - 0 views

  • It’s impossible to specify different images/formats/files for different networks, other than for Facebook and Twitter. The Facebook image is used, by default, for all other networks/systems. This is a limitation of how these platforms work. The same goes for titles and descriptions
  • The image size and cropping won’t always be perfect across different platforms, as the way in which they work is inconsistent.
  • Specifically, your images should look great on ‘broadcast’ platforms like Facebook and Twitter, but might sometimes crop awkwardly on platforms designed for 1:1 or small group conversations, like WhatsApp or Telegram.
  • For best results, you should manually specify og:image tags for each post, through the plugin. You should ensure that your primary og:image is between 1200x800px and 2000x1600px, and is less than 2mb in size.
  • As an open project, the Open Graph is constantly changing and improving
  • these tags and approaches sometimes conflict with or override each other. Twitter’s twitter:image property, for example, overrides an og:image value for images shared via Twitter, when both sets of tags are on the same page.
  • the open graph specification allows us to provide multiple og:image values. This, in theory, allows the platform to make the best decision about which size to use and allows people who are sharing some choice over which image they pick. How different platforms interpret these values, however, varies considerably
  • Because each platform maintains its own rules and documentation on how they treat og:image tags, there are often gaps in our knowledge. Specific restrictions, edge cases, and in particular, information on which rules override other rules, are rarely well-documented
  • we’re choosing to optimize the first image in the og:image set for large, high-resolution sharing – the kind which Facebook supports and requires, but which causes issues with networks that expect a smaller image (like Instagram or Telegram)
  • In the context of a newsfeed, like on Facebook or Twitter, the quality of the image is much more important – you’re scrolling through lots of noise, you’re less engaged, and a better image is an increased chance of a click/share/like. 
  • When the ‘full’ size image is over 2mb file size, and/or over 2000 pixels on either axis, we’ll try and fall back to a smaller standard WordPress image size (or to scan the post content for an alternative).
  • If we can’t find a suitable smaller image, we’ll omit the og:image tag, in the hopes that the platform will select an appropriate alternative. Note that this may result in the image not appearing in some sharing contexts.
  • If the ratio exceeds 3:1 we’ll present a warning (this is the maximum ratio for many networks)
  • For most normal use-cases, we’d suggest that you manually set og:image values on your posts via the Yoast SEO plugin, and ensure that their dimensions are between 1200x800px and 2000x1600px (and that they’re less than 2mb in size)
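    Putting that advice together, the relevant head tags look roughly like this sketch (URLs and dimensions are placeholders within the recommended range):

      <meta property="og:image" content="https://example.com/img/share-1200x800.jpg" />
      <meta property="og:image:width" content="1200" />
      <meta property="og:image:height" content="800" />
      <meta name="twitter:card" content="summary_large_image" />
      <meta name="twitter:image" content="https://example.com/img/share-1200x800.jpg" />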
Rob Laporte

JavaScript SEO - How Does Google Crawl JavaScript « SEOPressor - WordPress SE... - 0 views

  • 1. Googlebot crawls; Caffeine indexes and renders. 2. For HTML web pages, Googlebot requests a page and downloads the HTML; the contents are then indexed by Caffeine. 3. For JavaScript web pages, Googlebot requests a page and downloads the HTML, and a first indexing happens. Caffeine then renders the page and sends the rendered links and data back to Googlebot for the crawl queue; after the re-crawl, a second indexation happens. 4. Rendering is resource-heavy and the second indexation is put in a queue, which makes it less efficient. 5. Use the fetch and render tool in Google Search Console and Chrome 41 to gauge how well Google can index your JavaScript pages.